Phase shift model design for 6G reconfigurable intelligent surface
WANG Dan, LIANG Jiamin, LIU Jinzhi, ZHANG Youshou
Journal of Computer Applications    2021, 41 (9): 2694-2698.   DOI: 10.11772/j.issn.1001-9081.2020111836
To address the high energy consumption of relay communication and the difficulty of constructing 5G base stations, research on Reconfigurable Intelligent Surface (RIS) technology was introduced into 6G mobile communication. Aiming at the characteristic loss and instability of the truncated Hadamard matrix and Discrete Fourier Transform (DFT) matrix when constructing intelligent surfaces, a new RIS phase shift model design scheme was proposed that constructs unitary matrices from Hankel and Toeplitz matrices. The properties of the unitary matrix were used to minimize the channel error and improve the reliability of the communication channel. The simulation results show that, compared with non-RIS-assisted communication, RIS-assisted communication achieves a user receiving rate gain of 1 (bit·s^-1)/Hz when the number of RIS units is 15, and the gain becomes increasingly significant as the number of RIS units grows. When the number of RIS units is 4, the two proposed unitary-matrix constructions of the reflecting surface are more reliable than the DFT-matrix construction and obtain a performance gain of about 0.5 dB.
Survey of unmanned aerial vehicle cooperative control
MA Ziyu, HE Ming, LIU Zujun, GU Lingfeng, LIU Jintao
Journal of Computer Applications    2021, 41 (5): 1477-1483.   DOI: 10.11772/j.issn.1001-9081.2020081314
Unmanned Aerial Vehicle (UAV) cooperative control means that a group of UAVs, communicating with each other and taking swarm intelligence as the core, complete a common mission through rational division of labor and cooperation. A UAV swarm is a multi-agent system in which many UAVs with a certain degree of autonomy carry out various tasks based on local rules. Compared with a single UAV, a UAV swarm offers great advantages in efficiency, flexibility and reliability. In view of the latest developments in UAV cooperative control technology, firstly, the application prospects of multi-UAV technology were illustrated with examples from both civil and military perspectives. Then, the differences and development statuses of the three mainstream cooperative control methods, namely consensus control, flocking control and formation control, were compared and analyzed. Finally, some suggestions on the delay, obstacle avoidance and endurance problems of cooperative control were given to support future research and development of UAV cooperative control.
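The consensus control idea surveyed above can be illustrated with a minimal discrete-time sketch: each agent repeatedly moves its state toward those of its neighbors until the swarm agrees on a common value. The ring topology, step size and "altitude" interpretation below are illustrative assumptions, not taken from the survey.

```python
import numpy as np

def consensus_step(x, A, eps=0.2):
    """One discrete-time consensus update x(t+1) = x(t) - eps * L x(t),
    where L is the graph Laplacian of the adjacency matrix A."""
    L = np.diag(A.sum(axis=1)) - A
    return x - eps * L @ x

# Hypothetical example: 4 UAVs on a ring topology agreeing on altitude.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
x = np.array([100.0, 120.0, 90.0, 110.0])  # initial altitudes
for _ in range(100):
    x = consensus_step(x, A)
# for a connected graph the states converge to the initial average (105.0)
```

Because the Laplacian has zero row and column sums for an undirected graph, the average state is preserved at every step, which is why the agents meet at the mean of their initial values.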
Generating CP-nets with bounded tree width based on Dandelion code
LI Congcong, LIU Jinglei
Journal of Computer Applications    2021, 41 (1): 112-120.   DOI: 10.11772/j.issn.1001-9081.2020060972
Aiming at the high time complexity of reasoning over the Conditional Preference networks (CP-nets) graph model, an algorithm for Generating CP-nets with Bounded Tree Width based on Dandelion code (BTW-CP-nets Gen) was proposed. First, through the bidirectional mapping between Dandelion codes and tree structures with tree width k (k-trees), decoding and encoding algorithms between Dandelion codes and k-trees were derived to realize a one-to-one mapping between codes and tree structures. Second, the k-tree was used to bound the tree width of the CP-nets structure, and the k-tree feature tree was used to obtain the directed acyclic graph structure of the CP-nets. Finally, the bijection of discrete multi-valued functions was used to compute the conditional preference table of each CP-nets node, and dominance query tests were executed on the generated bounded tree-width CP-nets. Theoretical analysis and experimental data show that, compared with the Prüfer-code-based k-tree generation algorithm, the running time of BTW-CP-nets Gen on generating simple and complex structures is reduced by 21.1% and 30.5% respectively, and the node traversal ratio of the generated graph model in dominance queries is 18.48% and 29.03% higher on simple and complex structures respectively; the less time BTW-CP-nets Gen consumes, the higher the traversal node ratio of the dominance query. It can be seen that the BTW-CP-nets Gen algorithm can effectively improve the efficiency of graph-model reasoning.
Optimal coalition structure generation in monotonous overlapping coalition
GUO Zhipeng, LIU Jinglei
Journal of Computer Applications    2021, 41 (1): 103-111.   DOI: 10.11772/j.issn.1001-9081.2020060973
Aiming at the difficulty of solving the Overlapping Coalition Structure Generation (OCSG) problem in cooperative games with Overlapping Coalitions (OCF games), an effective algorithm based on the greedy method was proposed. First, OCF games with the number of coalitions constrained to k (kOCF games) were used to limit the scale of the OCSG problem. Then, a similarity measure was introduced to represent the similarity between any two coalition structures, and a monotonicity property was defined on this measure, meaning that the higher the similarity between a coalition structure and the optimal coalition structure, the greater that structure's value. Finally, for monotone kOCF games, the Coalition Constraints Greedy (CCG) algorithm was designed, which approximates the optimal coalition structure by inserting player numbers one by one, and its complexity was proved to be O(n^(2k+1)). The influence of different parameters and coalition value distributions on the performance of the proposed algorithm was analyzed and verified through experiments, and the algorithm was compared in terms of constraint conditions with the algorithm of Zick et al. (ZICK Y, CHALKIADAKIS G, ELKIND E, et al. Cooperative games with overlapping coalitions: charting the tractability frontier. Artificial Intelligence, 2019, 271: 74-97). The results show that when the maximum number of coalitions k is bounded by a constant, the number of searches of the proposed algorithm grows linearly with the number of agents. It can be seen that the CCG algorithm is fixed-parameter tractable in k and has good applicability.
Brain network feature identification algorithm for Alzheimer's patients based on MRI image
ZHU Lin, YU Haitao, LEI Xinyu, LIU Jing, WANG Ruofan
Journal of Computer Applications    2020, 40 (8): 2455-2459.   DOI: 10.11772/j.issn.1001-9081.2019122105
In view of the subjectivity and frequent misdiagnosis in manual identification of Alzheimer's Disease (AD) from brain imaging, a method for automatic AD identification by constructing a brain network from Magnetic Resonance Imaging (MRI) images was proposed. Firstly, MRI images were superimposed and divided into structural blocks, and the Structural SIMilarity (SSIM) between every pair of structural blocks was calculated to construct the network. Then, complex network theory was used to extract structural parameters, which were fed into a machine learning algorithm to realize automatic AD identification. The analysis found that classification performance was optimal with two input parameters, specifically node betweenness and edge betweenness. Further study found that classification performance was optimal when an MRI image was divided into 27 structural blocks, with the accuracy of the weighted and unweighted networks reaching up to 91.04% and 94.51% respectively. The experimental results show that a structural-similarity complex network based on MRI block division can identify AD with high accuracy.
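The network construction step above can be sketched as follows: compute the SSIM between every pair of blocks and connect blocks whose similarity exceeds a threshold. The global-SSIM formula uses the standard constants; the edge threshold and block size are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ssim(a, b, L=255.0):
    """Global SSIM between two equally sized image blocks (standard constants)."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))

def block_network(blocks, threshold=0.5):
    """Adjacency matrix: link two blocks when their SSIM exceeds a threshold
    (the threshold value here is an illustrative choice)."""
    n = len(blocks)
    adj = np.zeros((n, n), int)
    for i in range(n):
        for j in range(i + 1, n):
            if ssim(blocks[i], blocks[j]) > threshold:
                adj[i, j] = adj[j, i] = 1
    return adj
```

Graph statistics such as node and edge betweenness would then be computed on the resulting adjacency matrix before being passed to a classifier.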
Focused crawler method combining ontology and improved Tabu search for meteorological disaster
LIU Jingfa, GU Yaoping, LIU Wenjie
Journal of Computer Applications    2020, 40 (8): 2255-2261.   DOI: 10.11772/j.issn.1001-9081.2019122238
Considering that the traditional focused crawler easily falls into local optima and describes topics insufficiently, a focused crawler method combining Ontology and Improved Tabu Search (On-ITS) was proposed. First, the topic semantic vector was calculated by ontology semantic similarity, and the Web page text feature vector was constructed by position-weighting the text features of HyperText Markup Language (HTML) pages. Then, the vector space model was used to calculate the topic relevance of Web pages. On this basis, to analyze the comprehensive priority of a link, the topic relevance of the link anchor text and the PageRank (PR) value of the page containing the link were calculated. In addition, to prevent the crawler from falling into local optima, a focused crawler based on ITS was designed to optimize the crawling queue. Experimental results on the topics of rainstorm disaster and typhoon disaster show that, under the same environment, the accuracy of the On-ITS method is higher than those of the contrast algorithms by at most 58% and at least 8%, and the proposed algorithm also performs very well on the other evaluation indicators. The On-ITS focused crawler method can effectively improve the accuracy of obtaining domain information and capture more topic-related Web pages.
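The relevance and link-priority computations above can be sketched with sparse term-weight vectors and a weighted combination. The combination weights below are illustrative assumptions; the paper's actual weighting scheme may differ.

```python
import math

def cosine(u, v):
    """Cosine similarity between sparse term-weight vectors (dicts)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def link_priority(page_relevance, anchor_relevance, pagerank, w=(0.5, 0.3, 0.2)):
    """Combined link priority; the weights w are illustrative, not the paper's."""
    return w[0] * page_relevance + w[1] * anchor_relevance + w[2] * pagerank

# hypothetical topic and page vectors for the rainstorm-disaster topic
topic = {"rainstorm": 0.8, "disaster": 0.6, "warning": 0.2}
page = {"rainstorm": 0.5, "disaster": 0.5, "traffic": 0.3}
rel = cosine(topic, page)
```

Links would then be kept in a priority queue ordered by `link_priority`, with the tabu-search step perturbing that queue to escape local optima.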
Object detection algorithm based on asymmetric hourglass network structure
LIU Ziwei, DENG Chunhua, LIU Jing
Journal of Computer Applications    2020, 40 (12): 3526-3533.   DOI: 10.11772/j.issn.1001-9081.2020050641
Anchor-free deep object detection is a mainstream single-stage object detection approach. An hourglass network structure that incorporates multiple layers of supervisory information can significantly improve the accuracy of anchor-free object detection, but its speed is much lower than that of a common network at the same level, and the features of objects at different scales interfere with each other. To solve these problems, an object detection algorithm based on an asymmetric hourglass network structure was proposed. The proposed algorithm is not constrained by shape and size when fusing features of different network layers, and can quickly and efficiently abstract the semantic information of the network, making it easier for the model to learn the differences between scales. For detection at different scales, a multi-scale output hourglass network structure was designed to eliminate the mutual interference between features of different-scale objects and refine the output detection results. In addition, a dedicated non-maximum suppression algorithm for multi-scale outputs was used to improve the recall of the detection algorithm. Experimental results show that the AP50 of the proposed algorithm on the Common Objects in COntext (COCO) dataset reaches 61.3%, which is 4.2 percentage points higher than that of the anchor-free network CenterNet. The proposed algorithm surpasses the original in the balance between accuracy and time, and is particularly suitable for real-time object detection in industry.
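The paper's multi-scale non-maximum suppression builds on plain greedy NMS, which can be sketched as follows. This is the standard baseline procedure, not the paper's multi-scale variant.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop overlapping ones, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)  # the second box overlaps the first and is suppressed
```

A multi-scale variant would run this per output scale (or with scale-aware thresholds) before merging the surviving detections.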
Fast spectral clustering algorithm without eigen-decomposition
LIU Jingshu, WANG Li, LIU Jinglei
Journal of Computer Applications    2020, 40 (12): 3413-3422.   DOI: 10.11772/j.issn.1001-9081.2020061040
The traditional spectral clustering algorithm requires excessive time for eigen-decomposition when the number of samples is very large. To solve this problem, a fast spectral clustering algorithm without eigen-decomposition was proposed, which reduces the time overhead through multiplicative update iterations. Firstly, the Nyström algorithm was used for random sampling to establish the relationship between the sampled matrix and the original matrix. Then, the indicator matrix was updated iteratively based on the multiplicative update rule. Finally, the correctness and convergence of the designed algorithm were analyzed theoretically. The proposed algorithm was tested on five widely used real datasets and three synthetic datasets. Experimental results on the real datasets show that the average Normalized Mutual Information (NMI) of the proposed algorithm is 0.45, which is 12.5% higher than that of the k-means clustering algorithm; its computing time is 61.73 s, which is 61.13% less than that of the traditional spectral clustering algorithm; and its performance is superior to that of the hierarchical clustering algorithm, which verifies the effectiveness of the proposed algorithm.
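The Nyström sampling step above can be sketched as follows: a few landmark columns of the affinity matrix are used to build a low-rank approximation of the whole matrix, avoiding a full eigen-decomposition. The uniform landmark choice and the toy kernel below are illustrative assumptions.

```python
import numpy as np

def nystrom(C, sample_idx):
    """Nystrom approximation K ≈ C W^+ C^T, where C holds the sampled
    columns of K and W = C[sample_idx] is the landmark intersection block."""
    W = C[sample_idx, :]
    return C @ np.linalg.pinv(W) @ C.T

# illustrative check on a small low-rank Gram (kernel) matrix
rng = np.random.default_rng(1)
X = rng.random((50, 3))
K = X @ X.T                       # rank-3 affinity matrix
idx = [0, 10, 20, 30, 40]         # 5 sampled landmark points
K_hat = nystrom(K[:, idx], idx)
# when the landmarks span the column space, the approximation is near exact
```

The paper then replaces the eigen-step on the approximated matrix with multiplicative updates of the cluster indicator matrix, so only matrix products of Nyström-sized factors are needed per iteration.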
Blood pressure prediction with multi-factor cue long short-term memory model
LIU Jing, WU Yingfei, YUAN Zhenming, SUN Xiaoyan
Journal of Computer Applications    2019, 39 (5): 1551-1556.   DOI: 10.11772/j.issn.1001-9081.2018110008
Hypertension is a major health hazard, and blood pressure prediction is of great importance for avoiding the grave consequences of sudden increases in blood pressure. Based on the traditional Long Short-Term Memory (LSTM) network, a multi-factor cue LSTM model for both short-term prediction (predicting blood pressure for the next day) and long-term prediction (predicting blood pressure for the next several days) was proposed to provide early warning of undesirable changes in blood pressure. The multi-factor cues used in the model include time-series cues (e.g. heart rate) and contextual cues (e.g. age, Body Mass Index (BMI), gender, temperature). The change characteristics of the time-series data and the features of the other associated attributes were extracted for blood pressure prediction. Environmental factors were considered in blood pressure prediction for the first time, and multi-task learning was used to help the model capture the relations among the data and improve its generalization ability. The experimental results show that, compared with the traditional LSTM model and the LSTM with Contextual Layer (LSTM-CL) model, the proposed model decreases prediction error and prediction bias for diastolic blood pressure by 2.5%, 3.8% and 1.9%, 3.2% respectively, and reduces prediction error and prediction bias for systolic blood pressure by 0.2%, 0.1% and 0.6%, 0.3% respectively.
Fast scale adaptive object tracking algorithm with separating window
YANG Chunde, LIU Jing, QU Zhong
Journal of Computer Applications    2019, 39 (4): 1145-1149.   DOI: 10.11772/j.issn.1001-9081.2018081821
To solve the object drift caused by the Kernelized Correlation Filter (KCF) tracking algorithm under scale changes, a Fast Scale Adaptive Correlation Filter (FSACF) tracking algorithm was proposed. Firstly, a global gradient combination feature map based on salient color features was obtained by extracting features directly from the original frame image, reducing the effect of subsequent scale calculation on performance. Secondly, a separating-window method was applied to the global feature map, adaptively selecting the scale and calculating the corresponding maximum response value. Finally, a defined confidence function was used to adaptively update the iterative template function, improving the robustness of the model. Experimental results on video sets with different interference attributes show that, compared with the KCF algorithm, the accuracy of the FSACF algorithm was improved by 7.4 percentage points and the success rate by 12.8 percentage points; compared with the algorithm without the global feature and separating window, the frame rate was improved by a factor of 1.5. The results show that the FSACF algorithm efficiently avoids object drift under scale change, and is superior to the comparison algorithms in accuracy and success rate.
Bus arrival time prediction system based on Spark and particle filter algorithm
LIU Jing, XIAO Guanfeng
Journal of Computer Applications    2019, 39 (2): 429-435.   DOI: 10.11772/j.issn.1001-9081.2018081800
To improve the accuracy of bus arrival time prediction, a Particle Filter (PF) algorithm with stream-computing characteristics was used to establish a bus arrival time prediction model. To address the prediction error and particle optimization problems of the PF algorithm, the model was improved by introducing the latest bus speed and constructing observations, so that the predicted arrival times are closer to actual road conditions and the arrival times of multiple buses can be predicted simultaneously. Based on this model and the Spark platform, a real-time bus arrival time prediction software system was implemented. Compared with actual results, for the off-peak period, the maximum absolute error was 207 s and the mean absolute error was 71.67 s; for the peak period, the maximum absolute error was 270 s and the mean absolute error was 87.61 s. The mean absolute error of the predicted results was within 2 min, a commonly recognized ideal result. The experimental results show that the proposed model and the implemented system can accurately predict bus arrival times and meet passengers' actual demands.
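A bootstrap particle filter of the kind used above can be sketched in a few lines: predict each particle with a motion model, weight particles by how well they explain the observation, then resample. The one-dimensional dynamics, noise levels and constant speed below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(42)

def particle_filter(observations, n_particles=1000, speed=10.0):
    """Bootstrap particle filter tracking a bus position along a route."""
    particles = rng.normal(0.0, 5.0, n_particles)  # initial position belief
    estimates = []
    for z in observations:
        # predict: advance each particle by the assumed speed plus process noise
        particles += speed + rng.normal(0.0, 2.0, n_particles)
        # update: weight particles by the Gaussian observation likelihood
        weights = np.exp(-0.5 * ((z - particles) / 5.0) ** 2)
        weights /= weights.sum()
        # resample proportionally to the weights
        particles = particles[rng.choice(n_particles, n_particles, p=weights)]
        estimates.append(particles.mean())
    return estimates

true_pos = np.arange(10.0, 110.0, 10.0)              # bus moving at 10 m/s
obs = true_pos + rng.normal(0.0, 5.0, len(true_pos))  # noisy position reports
est = particle_filter(obs)
```

Arrival time would then be read off from the filtered position and speed estimates; the paper's refinement feeds the latest observed bus speed into the prediction step.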
Preference feature extraction based on Nyström method
YANG Meijiao, LIU Jinglei
Journal of Computer Applications    2018, 38 (9): 2515-2522.   DOI: 10.11772/j.issn.1001-9081.2018020296
To solve the low efficiency of feature extraction from movie ratings, a Nyström method combined with QR decomposition was proposed. Firstly, landmarks were sampled using an adaptive method, QR decomposition was applied to the internal matrix, and the decomposed matrices were recombined with the internal matrix for feature decomposition. The approximation quality of the Nyström method is closely related to the number and choice of selected landmarks, so landmarks were selected to preserve similarity after sampling. The adaptive sampling method ensures the accuracy of the approximation, and QR decomposition ensures the stability of the matrix, improving the accuracy of preference feature extraction; the more accurate the extracted preference features, the more stable the recommendation system and the more accurate the recommendations. Finally, a feature extraction experiment was performed on a dataset of actual audience movie ratings containing 480189 users and 17770 movies. The experimental results show that, when extracting the same number of landmarks, the accuracy and efficiency of the improved Nyström method are improved to a certain degree, and the time complexity is reduced from the original O(n^3) to O(nc^2) (c ≪ n) compared with pre-sampling. Compared with the standard Nyström method, the error is kept below 25%.
Stateful group rekeying scheme with tunable collusion resistance
AO Li, LIU Jing, YAO Shaowen, WU Nan
Journal of Computer Applications    2018, 38 (5): 1372-1376.   DOI: 10.11772/j.issn.1001-9081.2017102413
The Logical Key Hierarchy (LKH) protocol has been proved to reach the lower bound O(log n) of communication complexity when resisting complete collusion attacks. However, in some resource-constrained or commercial application environments, users still require communication overhead below O(log n). The Stateful Exclusive Complete Subtree (SECS) protocol has constant communication overhead, but it can only resist single-user attacks. Considering users' willingness to sacrifice some security to reduce communication overhead, a Hybrid Stateful Exclusive Complete Subtree (H-SECS) scheme was designed and implemented by combining LKH, which offers strict confidentiality, with SECS, which offers constant communication overhead. H-SECS configures the number of subgroups according to the security level of the application scenario to make an optimal tradeoff between communication overhead and collusion resistance. Theoretical analysis and simulation results show that, compared with the LKH and SECS protocols, the communication overhead of H-SECS can be tuned within the range between O(1) and O(log n).
Building extraction from high-resolution remotely sensed imagery based on neighborhood total variation and potential histogram function
SHI Wenzao, LIU Jinqing
Journal of Computer Applications    2017, 37 (6): 1787-1792.   DOI: 10.11772/j.issn.1001-9081.2017.06.1787
Concerning the low accuracy and high data requirements of existing methods for building identification and extraction from high-resolution remotely sensed imagery, a new method based on Neighborhood Total Variation (NTV) and Potential Histogram Function (PHF) was proposed. Firstly, the weighted NTV likelihood function was calculated for each pixel of a remotely sensed image, segmentation was performed with a region growing method, and candidate buildings were selected from the segmentation results under rectangularity and aspect-ratio constraints. Then, shadows were detected automatically. Finally, the shadows were processed with morphological operations, buildings were extracted by computing the adjacency relationship between the processed shadows and the candidate buildings, and building boundaries were fitted with minimum enclosing rectangles. To verify the validity of the proposed method, nine representative sub-images were chosen from PLEIADES images covering Shenzhen for experiments. The experimental results show that the average precision and recall of the proposed method are 97.71% and 84.21% under object-based evaluation, and that the proposed method improves the overall F1 performance by more than 10% compared with two other building extraction methods based on level sets and color invariant features.
Whole process optimized garbage collection for solid-state drives
FANG Caihua, LIU Jingning, TONG Wei, GAO Yang, LEI Xia, JIANG Yu
Journal of Computer Applications    2017, 37 (5): 1257-1262.   DOI: 10.11772/j.issn.1001-9081.2017.05.1257
Due to the inherent restrictions of NAND flash, such as erase-before-write and a large erase unit, flash-based Solid-State Drives (SSD) require garbage collection operations to reclaim invalid physical pages. However, the high overhead caused by garbage collection significantly decreases the performance and lifetime of an SSD, and becomes even more severe when the SSD is heavily fragmented. Existing Garbage Collection (GC) algorithms focus on only some steps of the GC operation, and none of them provides a comprehensive solution that takes all steps of the GC process into consideration. Based on a detailed analysis of the GC process, a whole-process optimized garbage collection algorithm named WPO-GC (Whole Process Optimized Garbage Collection) was proposed, which integrates optimizations in each step of GC to minimize the negative impact on normal read/write requests and on SSD lifetime. The WPO-GC was implemented on SSDsim, an open-source SSD simulator, to evaluate its efficiency. The experimental results show that, compared with a typical GC algorithm, the proposed algorithm decreases read and write I/O response times by 20%-40% and 17%-40% respectively, and improves wear leveling by nearly 30% to extend the lifetime.
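One step that any GC pipeline must perform is victim-block selection; a common greedy baseline reclaims the block with the most invalid pages, since that minimizes the valid pages that must be copied before the erase. This sketch shows only that baseline step with simple page counters; it is not the WPO-GC algorithm itself, which optimizes every stage of the process.

```python
def pick_victim(blocks):
    """Greedy victim selection: the block with the most invalid pages
    is the cheapest to reclaim."""
    return max(range(len(blocks)), key=lambda i: blocks[i]["invalid"])

def collect(blocks, victim):
    """Migrate the victim's valid pages, then erase it (modeled as counters).
    Returns how many valid pages had to be copied."""
    moved = blocks[victim]["valid"]
    blocks[victim] = {"valid": 0, "invalid": 0}
    return moved

blocks = [{"valid": 60, "invalid": 4},
          {"valid": 8,  "invalid": 56},
          {"valid": 30, "invalid": 34}]
v = pick_victim(blocks)     # block 1 has the most invalid pages
moved = collect(blocks, v)  # only 8 valid pages need migration
```

The cost asymmetry is visible here: reclaiming block 0 instead would have forced 60 page copies for only 4 reclaimed pages.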
Improving feature selection and matrix recovery ability by CUR matrix decomposition
LEI Hengxin, LIU Jinglei
Journal of Computer Applications    2017, 37 (3): 640-646.   DOI: 10.11772/j.issn.1001-9081.2017.03.640
To solve the problems that users and products cannot be accurately selected in large datasets and that user behavior preferences cannot be predicted accurately, a new CUR (Column Union Row) matrix decomposition method was proposed. A small number of columns were selected from the original matrix to form the matrix C, a small number of rows were selected to form the matrix R, and the matrix U was then constructed by QR (orthogonal-triangular) decomposition. The matrices C and R are feature matrices of users and products respectively; being composed of real data, they reflect the detailed characteristics of both users and products. To predict users' behavioral preferences accurately, the CUR algorithm was improved to give it greater stability and accuracy in matrix recovery. Finally, experiments on a real dataset (the Netflix dataset) indicate that, compared with traditional singular value decomposition, principal component analysis and other matrix decomposition methods, the CUR matrix decomposition has higher accuracy and better interpretability in feature selection; in matrix recovery, it also shows superior stability and accuracy, with precision above 90%. CUR matrix decomposition has great application value in recommender systems and traffic flow prediction.
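The structure of a CUR decomposition can be sketched as follows. Note one deliberate simplification: the paper builds U through QR decomposition, while this sketch uses the standard pseudo-inverse construction of U as a simpler stand-in; the sampled row and column indices are also illustrative.

```python
import numpy as np

def cur(A, col_idx, row_idx):
    """CUR sketch: C and R are actual columns/rows of A (hence interpretable),
    and U is chosen so that C @ U @ R best reconstructs A (via pseudo-inverses).
    The paper instead constructs U with QR decomposition."""
    C = A[:, col_idx]
    R = A[row_idx, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R

# illustrative low-rank "rating" matrix: 40 users x 30 items, rank 4
rng = np.random.default_rng(0)
A = rng.random((40, 4)) @ rng.random((4, 30))
C, U, R = cur(A, col_idx=[0, 5, 10, 15, 20], row_idx=[0, 8, 16, 24, 32])
err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
# with enough sampled rows/columns to span the low-rank structure, err is tiny
```

Unlike SVD factors, C and R here are real user and item vectors, which is the interpretability advantage the abstract refers to.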
Conditional preference mining based on MaxClique
TAN Zheng, LIU Jinglei, YU Hang
Journal of Computer Applications    2017, 37 (11): 3107-3114.   DOI: 10.11772/j.issn.1001-9081.2017.11.3107
To address the fact that conditional constraints (context constraints) for personalized queries in databases were not fully considered, a constraint model was proposed in which the context i+ ≻ i- | X means that the user prefers i+ to i- under the context constraint X. An association rule mining algorithm based on MaxClique was used to obtain user preferences, and a Conditional Preference Mining (CPM) algorithm combining context with the obtained preference rules was proposed. The experimental results show that the context preference mining model has strong preference expression ability. Meanwhile, under different minimum supports, minimum confidences and data scales, comparisons of the CPM algorithm with the Apriori algorithm and the CONTENUM algorithm show that the proposed CPM algorithm can obviously improve the efficiency of generating user preferences.
Improvement of sub-pixel morphological anti-aliasing algorithm
LIU Jingrong, DU Huimin, DU Qinqin
Journal of Computer Applications    2017, 37 (10): 2871-2874.   DOI: 10.11772/j.issn.1001-9081.2017.10.2871
Since the Sub-pixel Morphological Anti-Aliasing (SMAA) algorithm extracts too few contours from images and needs a large amount of storage, an improved SMAA algorithm was presented. In the improved algorithm, the product of a pixel's luminance and an adjustable factor is used as a dynamic threshold for deciding whether the pixel is a boundary pixel. Compared with the fixed threshold used for boundary decisions in SMAA, the dynamic threshold is stricter in this decision, so the presented algorithm can extract more boundaries. Based on an analysis of the different morphological patterns and the storage they use, redundant storage was merged to reduce the memory footprint. The algorithm was implemented with the Microsoft DirectX SDK and HLSL under Windows 7. The experimental results show that the proposed algorithm extracts clearer boundaries while reducing memory usage by 51.93%.
Analysis algorithm of electroencephalogram signals for epilepsy diagnosis based on power spectral density and limited penetrable visibility graph
WANG Ruofan, LIU Jing, WANG Jiang, YU Haitao, CAO Yibin
Journal of Computer Applications    2017, 37 (1): 175-182.   DOI: 10.11772/j.issn.1001-9081.2017.01.0175
Focusing on the poor noise robustness of the Visibility Graph (VG) algorithm, an improved Limited Penetrable Visibility Graph (LPVG) algorithm was proposed. LPVG maps time series into networks by connecting points of the time series that satisfy certain conditions based on the visibility criterion and a limited penetrable distance. Firstly, the performance of the LPVG algorithm was analyzed. Secondly, LPVG was combined with Power Spectral Density (PSD) and applied to the automatic identification of epileptic ElectroEncephaloGram (EEG) signals before, during and after seizures. Finally, the characteristic parameters of the LPVG network in the three states were extracted to study the influence of epileptic seizures on the network topology. The simulation results show that, compared with VG and the Horizontal Visibility Graph (HVG), LPVG has higher time complexity but strong robustness to noise in the signal: when mapping typical periodic, random, fractal and chaotic time series into networks, the fluctuation rates of the clustering coefficient of the LPVG network were always the lowest as the noise intensity increased, at 6.73%, 0.05%, 0.99% and 3.20% respectively. The PSD and LPVG analysis found that epileptic seizures greatly influence brain energy: PSD was obviously enhanced in the delta frequency band and significantly reduced in the theta frequency band, and the topological structure of the LPVG network changed during seizures, characterized by independently enhanced network modules, increased average path length and decreased graph index complexity. The PSD and LPVG measures applied in this paper can effectively characterize abnormalities in the energy distribution and topological structure of a single EEG channel, providing help for the pathological study and clinical diagnosis of epilepsy.
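The LPVG construction described above can be sketched directly from its definition: two time points are linked when the straight sight line between them is blocked by at most a limited number of intermediate points (with limit 0 this reduces to the plain visibility graph). The interpretation of the limit as a count of blocking points is the common formulation and is assumed here.

```python
def lpvg_edges(series, limit=1):
    """Limited Penetrable Visibility Graph: link points i < j if at most
    `limit` intermediate points rise above the sight line between them."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            blocked = 0
            for k in range(i + 1, j):
                # height of the sight line from (i, y_i) to (j, y_j) at index k
                line = series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                if series[k] >= line:
                    blocked += 1
            if blocked <= limit:
                edges.add((i, j))
    return edges

s = [1.0, 3.0, 2.0, 4.0]
vg = lpvg_edges(s, limit=0)    # plain visibility graph
lpvg = lpvg_edges(s, limit=1)  # one penetrable point allowed: extra edges appear
```

Allowing a penetration budget is what gives LPVG its noise robustness: a single noisy spike no longer severs long-range visibility links, at the cost of the extra O(n^3) scan.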
Fast content distribution method of integrating P2P technology in cloud platform
LIU Jing, ZHAO Wenju
Journal of Computer Applications    2017, 37 (1): 31-36.   DOI: 10.11772/j.issn.1001-9081.2017.01.0031
The HyperText Transfer Protocol (HTTP) is usually adopted for data transfer in the content distribution process of cloud storage services. When a large number of users request to download the same file from the cloud storage server in a short time, the bandwidth pressure on the cloud server becomes very large and downloads become very slow. Aiming at this problem, P2P technology was integrated into fast content distribution for a cloud platform, and a dynamic protocol conversion mechanism was proposed to achieve a fast and better content distribution process. Four protocol conversion metrics, namely user type, service quality, time yield and bandwidth gains, were selected, and the OpenStack cloud platform was used to realize the proposed protocol conversion method. Compared with pure HTTP or pure P2P downloading, the experimental results show that the proposed method gives client users shorter download times, and the service provider's bandwidth is effectively saved when there are many P2P clients.
Reference | Related Articles | Metrics
Mining Ceteris Paribus preference from preference database
XIN Guanlin, LIU Jinglei
Journal of Computer Applications    2016, 36 (8): 2092-2098.   DOI: 10.11772/j.issn.1001-9081.2016.08.2092
Abstract375)      PDF (1198KB)(450)       Save
Focusing on the issue that traditional recommendation systems require users to give an explicit preference matrix (U-I matrix) before automated techniques can capture user preferences, a method for mining the preference information of an Agent from a preference database was introduced. From the perspective of knowledge discovery, a k-order preference mining algorithm named kPreM was proposed based on Ceteris Paribus rules (CP rules). The k-order CP rules were used to prune the information in the preference database, which reduces the number of database scans and increases the efficiency of preference mining. A general graphical model, CP-nets (Conditional Preference networks), was then used as a tool to show that user preferences can be approximated by corresponding CP-nets. The theoretical analysis and simulation results show that user preferences are conditional preferences. In addition, mining the CP-nets preference model provides a theoretical basis for designing personalized recommendation systems.
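The "ceteris paribus" semantics underlying CP rules can be illustrated with a small check (a sketch under assumed data structures, not the kPreM algorithm itself): an observed preference pair supports a CP rule only when the two outcomes match the rule's context, swap exactly the rule's preferred and dispreferred values, and agree everywhere else.

```python
def supports_rule(better, worse, context, var, pref_val, disp_val):
    """Does the observed pair (better > worse) support the CP rule
    'given `context`, prefer `pref_val` over `disp_val` on `var`,
    all other attributes being equal'?  Outcomes are dicts."""
    if better[var] != pref_val or worse[var] != disp_val:
        return False
    # both outcomes must satisfy the rule's context
    if any(better.get(k) != v or worse.get(k) != v for k, v in context.items()):
        return False
    # ceteris paribus: every attribute other than `var` must match
    others = set(better) - {var}
    return all(better[k] == worse[k] for k in others)
```

Counting how many pairs in the preference database support each candidate rule is the kind of scan that k-order pruning is meant to cut down.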
Reference | Related Articles | Metrics
Info-association topology based social relationship mining on Internet
LIU Jinwen, XING Kai, RUI Weikang, ZHANG Liping, ZHOU Hui
Journal of Computer Applications    2016, 36 (7): 1875-1880.   DOI: 10.11772/j.issn.1001-9081.2016.07.1875
Abstract540)      PDF (1000KB)(420)       Save
To solve the problems that relation extraction methods based on supervised learning need a large amount of labeled training data and pre-defined relation types, a method for personal relation extraction was proposed that constructs a correlation network from word co-occurrence information and performs graph clustering analysis on that network. Firstly, 500 highly related person pairs were obtained from news title data for the relation extraction research. Secondly, the news data containing the related person pairs were crawled and pre-processed, and the keywords in the sentences containing the person pairs were extracted using Term Frequency-Inverse Document Frequency (TF-IDF). Thirdly, the correlation between words was computed from word co-occurrence information, and the keyword correlation network was constructed. Finally, the personal relations were extracted by graph clustering analysis on the correlation network. In the relation extraction experiments, compared with the traditional Chinese relation extraction algorithm based on word co-occurrence and pattern matching, the precision, recall and F-score of the proposed method were improved by 5.5, 3.7 and 4.4 percentage points respectively. The experimental results show that the proposed algorithm can effectively extract abundant, high-quality personal relation data from news data without labeled training data.
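The TF-IDF keyword selection and co-occurrence network construction steps can be sketched as follows (a minimal illustration over tokenized sentences; the top-k cutoff and edge weighting are assumptions, not the paper's exact settings):

```python
import math
from collections import Counter
from itertools import combinations

def tfidf(docs):
    """docs: list of token lists; returns one {term: tf-idf score} per doc."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({t: (c / len(doc)) * math.log(n / df[t])
                       for t, c in tf.items()})
    return scores

def cooccurrence_edges(docs, top_k=3):
    """Link the top-k TF-IDF keywords of each sentence pairwise,
    accumulating co-occurrence counts as edge weights."""
    weights = Counter()
    for doc, score in zip(docs, tfidf(docs)):
        keys = sorted(score, key=score.get, reverse=True)[:top_k]
        for u, v in combinations(sorted(keys), 2):
            weights[(u, v)] += 1
    return weights
```

Graph clustering over the resulting weighted network then groups the keywords that characterize each personal relation.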
Reference | Related Articles | Metrics
Estimation algorithm of switching speech power spectrum for automatic speech recognition system
LIU Jingang, ZHOU Yi, MA Yongbao, LIU Hongqing
Journal of Computer Applications    2016, 36 (12): 3369-3373.   DOI: 10.11772/j.issn.1001-9081.2016.12.3369
Abstract608)      PDF (922KB)(449)       Save
To address the poor robustness of Automatic Speech Recognition (ASR) systems in noisy environments, a new switching speech power spectrum estimation algorithm was proposed. Firstly, based on the assumption that the speech spectral amplitude is better modelled by a Chi distribution, a modified speech power spectrum estimation algorithm based on the Minimum Mean Square Error (MMSE) criterion was derived. Then, by incorporating the Speech Presence Probability (SPP), a new SPP-based MMSE estimator was obtained. Next, the new estimator was combined with the conventional Wiener filter into a switching algorithm: in heavy noise environments, the modified MMSE estimator is used to estimate the clean speech power spectrum; otherwise, the Wiener filter is employed to reduce the computational load. This yields the final switching speech power spectrum estimation algorithm for ASR systems. The experimental results show that, compared with the traditional MMSE estimator with a Rayleigh prior, the recognition accuracy of the proposed algorithm was improved by an average of 8 percentage points in various noise environments. The proposed algorithm improves the robustness of the ASR system by removing noise, while reducing the computational cost.
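The switching logic can be sketched per frequency bin as below. This is a heavily simplified stand-in: the SPP-weighted gain with a floor is only a placeholder for the paper's Chi-prior MMSE estimator, and the switch threshold, SPP value and floor gain are all assumed parameters.

```python
import math

def switched_gain(noisy_power, noise_power, spp=0.8,
                  switch_snr_db=5.0, floor_gain=0.1):
    """Pick the spectral gain for one frequency bin.

    The a-priori SNR xi is estimated by spectral subtraction.  Below the
    switch threshold (heavy noise) an SPP-weighted gain stands in for the
    paper's Chi-prior MMSE estimator; otherwise the plain Wiener gain
    xi / (1 + xi) is used, which is cheaper to evaluate.
    """
    eps = 1e-12
    xi = max(noisy_power / (noise_power + eps) - 1.0, eps)  # a-priori SNR
    wiener = xi / (1.0 + xi)
    if 10.0 * math.log10(xi) < switch_snr_db:
        # heavy noise: weight by speech presence probability, with a small
        # floor gain covering the speech-absent case
        return spp * wiener + (1.0 - spp) * floor_gain
    return wiener

def estimate_speech_power(noisy_power, noise_power, **kw):
    g = switched_gain(noisy_power, noise_power, **kw)
    return g * g * noisy_power
```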
Reference | Related Articles | Metrics
Image compressive sensing reconstruction via total variation and adaptive low-rank regularization
LIU Jinlong, XIONG Chengyi, GAO Zhirong, ZHOU Cheng, WANG Shuxian
Journal of Computer Applications    2016, 36 (1): 233-237.   DOI: 10.11772/j.issn.1001-9081.2016.01.0233
Abstract552)      PDF (789KB)(555)       Save
Aiming at the problem that collaborative sparse image Compressive Sensing (CS) reconstruction based on fixed transform bases cannot adequately exploit the self-similarity of images, an improved reconstruction algorithm combining Total Variation (TV) with adaptive low-rank regularization was proposed. Firstly, similar patches were found by image block matching and grouped into nonlocal similar patch groups. Then, weighted low-rank approximation of the nonlocal similar patch groups was adopted to replace the 3D wavelet transform filtering used in collaborative sparse representation. Finally, a regularization term combining gradient sparsity with the low-rank prior of the nonlocal similar patch groups was embedded in the reconstruction model, which was solved by the Alternating Direction Method of Multipliers (ADMM) to obtain the reconstructed image. The experimental results show that, in comparison with the Recovery of Collaborative Sparsity (RCoS) algorithm, the proposed method increases the Peak Signal-to-Noise Ratio (PSNR) of reconstructed images by about 2 dB on average, and significantly improves reconstruction quality while better preserving texture details, since the nonlocal self-similar structure is described precisely.
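In hedged notation (the symbols below are assumptions reconstructed from the abstract, not the paper's exact formulation), the reconstruction model combines the two priors as:

```latex
% y = \Phi x : CS measurements;  R_i x : the i-th nonlocal similar patch
% group;  \|\cdot\|_{w,*} : weighted nuclear norm;  \lambda : balance weight
\min_{x} \; \mathrm{TV}(x) \;+\; \lambda \sum_{i} \big\| R_i x \big\|_{w,*}
\quad \text{s.t.} \quad y = \Phi x
```

ADMM then splits the TV term from the low-rank term, with the latter subproblem handled by weighted singular-value thresholding on each patch group.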
Reference | Related Articles | Metrics
Lane line recognition using region division on structured roads
WANG Yue, FAN Xianxing, LIU Jincheng, PANG Zhenying
Journal of Computer Applications    2015, 35 (9): 2687-2691.   DOI: 10.11772/j.issn.1001-9081.2015.09.2687
Abstract393)      PDF (987KB)(435)       Save
It is difficult to balance the accuracy and real-time performance of lane line recognition; therefore, a new lane line recognition method based on region division was proposed. Firstly, an improved OTSU algorithm was applied to segment the edge image; then, feature points in the edge image were extracted using the Progressive Probabilistic Hough Transform (PPHT) and fitted into lines using the Least Square Method (LSM). Finally, all fitted lines were screened with an anti-interference algorithm to select the plausible lane lines. Comparative experiments were conducted with three other algorithms from the references, and an evaluation model was put forward to assess the performance of the algorithms on 500 typical lane images. Meanwhile, the response time of the algorithm was evaluated by calculating the average processing time per frame of a 1 min 26 s video. The experimental results show that the precision, recall and F-value of the proposed algorithm are all better than those of the comparison algorithms, and the proposed algorithm also meets the requirement of real-time processing.
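The LSM fitting and screening steps can be sketched as follows (a minimal illustration; the slope threshold in the screening rule is a hypothetical stand-in for the paper's anti-interference algorithm):

```python
def fit_line_lsm(points):
    """Least Square Method (LSM) fit of y = k*x + b over feature points
    extracted by the Hough transform; points is a list of (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - k * sx) / n
    return k, b

def plausible_lane(k, min_abs_slope=0.3):
    """Toy screening rule: near-horizontal lines are unlikely lane lines."""
    return abs(k) >= min_abs_slope
```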
Reference | Related Articles | Metrics
Transmission channel selection of cognitive radio networks based on quality optimization for video streaming
LIU Jinxia, CHEN Lianna, LIU Yanwei, WANG Zunyi, PENG Guangchao
Journal of Computer Applications    2015, 35 (6): 1527-1530.   DOI: 10.11772/j.issn.1001-9081.2015.06.1527
Abstract568)      PDF (789KB)(381)       Save

The conventional channel selection approach in cognitive radio networks selects the transmission channel randomly based on channel characteristics, while neglecting the channel quality requirements of the video streaming at the application layer. To solve this problem, a cross-layer optimized channel selection method targeting video streaming quality was presented. By minimizing the end-to-end video distortion, the video encoding quantization parameter at the application layer, and the adaptive modulation and channel coding as well as the specific transmission channel at the physical layer, were jointly selected. Extensive video transmission simulations of the proposed algorithm were performed over multi-channel cognitive radio networks. The experimental results show that this cross-layer optimized channel selection approach improves the objective quality of the secondary user's video streaming by more than 1.5 dB compared with cross-layer optimization without channel selection.

Reference | Related Articles | Metrics
Smart meter software function test model based on disassembly technique
LIU Jinshuo, WANG Xiebing, CHEN Xin, DENG Juan
Journal of Computer Applications    2015, 35 (2): 555-559.   DOI: 10.11772/j.issn.1001-9081.2015.02.0555
Abstract508)      PDF (776KB)(368)       Save

During smart meter production, electric power enterprises have noticed significant differences between the sample meters used for inspection and the batch meters produced in large quantities. Due to insufficient testing, many batch meters either work in an unstable state or are rejected for quality reasons, and their maintenance causes unnecessary expense. Aiming at this problem, a smart meter software function test scheme was formulated and an embedded smart meter code reversal model was designed. Taking the analysis of the smart meter kernel program to obtain system operating characteristics as its main idea, the model performs a software function difference test on smart meters, using disassembly technology to analyze the function of the smart meter firmware code. The model includes three modules: firmware code extraction, firmware code disassembly and software function comparison. In the firmware code disassembly module, a Single-step Disassembly Algorithm (SDA) based on the traditional linear sweep and recursive scanning algorithms was adopted. Applying the model to the identification of sample and batch meters achieves remarkable results; meanwhile, the model can keep the function and quality error within 20 percent when maintaining meters in use and to be deployed.

Reference | Related Articles | Metrics
Fault tolerance as a service method in cloud platform based on virtual machine deployment policy
LIU Xiaoxia, LIU Jing
Journal of Computer Applications    2015, 35 (12): 3530-3535.   DOI: 10.11772/j.issn.1001-9081.2015.12.3530
Abstract448)      PDF (930KB)(262)       Save
Concerning the problem of how to make full use of the resources in cloud infrastructure to satisfy the various, highly reliable fault tolerance requirements of cloud application systems and their tenants, a fault tolerance as a service method oriented to cloud application tenants and service providers was proposed based on a virtual machine deployment policy. According to the specific fault tolerance requirements of cloud application tenants, suitable fault tolerance methods with corresponding fault tolerance levels are adopted. Then, the revenue and resource usage of the service provider are computed and optimized. Based on this analysis, the virtual machines providing fault tolerance services are deployed so as to make full use of resources at the virtual machine level and provide more reliable fault tolerance services for cloud application systems and their tenants. The experimental results show that the proposed method guarantees the revenue of service providers and achieves more flexible and more reliable fault tolerance services for multi-tenant cloud application systems.
Reference | Related Articles | Metrics
Quay crane allocation and scheduling joint optimization model for single ship
ZHENG Hongxing, WU Yue, TU Chuang, LIU Jinping
Journal of Computer Applications    2015, 35 (1): 247-251.   DOI: 10.11772/j.issn.1001-9081.2015.01.0247
Abstract497)      PDF (885KB)(478)       Save

This paper proposed a linear programming model for the Quay Crane (QC) allocation and scheduling problem for a single ship under a fixed berth allocation. With the aim of minimizing the working time of the ship at berth, the model considered not only the disruptive waiting time while the quay cranes are working, but also the workload balance between the cranes. An Improved Ant Colony Optimization (IACO) algorithm embedding a solution-space splitting strategy was presented to solve the model. The experimental results show that proper allocation and scheduling of quay cranes by the model saves 31.86% of the crane resource on average compared with deploying all available cranes. Compared to the solution obtained by Lingo, the results of the IACO algorithm deviate by 5.23% on average, while the average CPU (Central Processing Unit) computation time is reduced by 78.7%, which demonstrates the feasibility and validity of the proposed model and algorithm.

Reference | Related Articles | Metrics
Game-theoretic model of active attack on steganographic system
LIU Jing, TANG Guangming
Journal of Computer Applications    2014, 34 (3): 720-723.   DOI: 10.11772/j.issn.1001-9081.2014.03.0720
Abstract468)      PDF (548KB)(336)       Save

To analyze active attacks on steganographic systems, the adversarial relationship between the steganographer and the active attacker was modeled. A steganographic game with the embedding rate and error rate as the payoff function was proposed. Using the basic theory of two-person finite zero-sum games, the equilibrium between the steganographer and the active attacker was analyzed, and a method to obtain their equilibrium strategies was given. An example case was then solved to demonstrate the ideas presented in the model. This model not only provides a theoretical basis for the steganographer and the active attacker to determine their optimal strategies, but also offers guidance for designing steganographic algorithms that are robust to active attacks.
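For the two-strategy case, the equilibrium of such a two-person finite zero-sum game has a standard closed form, sketched below (a textbook illustration of the solution method, not the paper's specific payoff matrix):

```python
def solve_2x2_zero_sum(A):
    """Equilibrium of a 2x2 zero-sum game with payoff matrix A = [[a, b],
    [c, d]] for the row maximizer.

    Returns (value, p, q): p is the row player's probability of playing
    strategy 1, q the column player's.  If a saddle point exists, the
    pure-strategy value is returned with p = q = None.
    """
    (a, b), (c, d) = A
    row_mins = [min(a, b), min(c, d)]
    col_maxs = [max(a, c), max(b, d)]
    v_lower, v_upper = max(row_mins), min(col_maxs)
    if v_lower == v_upper:
        return v_lower, None, None      # saddle point: pure strategies
    denom = a - b - c + d               # nonzero when no saddle point exists
    p = (d - c) / denom                 # row player's mixed strategy
    q = (d - b) / denom                 # column player's mixed strategy
    v = (a * d - b * c) / denom         # value of the game
    return v, p, q
```

For matching pennies, for example, both players mix 50/50 and the game value is zero; in the steganographic game, the analogous mixed strategies balance embedding rate against the attacker's disruption.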

Related Articles | Metrics